National Grid ESO are required by our Licence and by the REMIT regulation (the EU Regulation on wholesale Energy Market Integrity and Transparency) to monitor the market for suspicious activity relating to manipulation, insider trading, breaches of the Grid Code and similar issues. Our current manual processes are not infinitely scalable or transferable as the market grows, so greater automation and sophistication are required.
The development of a more sophisticated, Machine Learning (ML)-based solution will be investigated to increase the efficiency of team activities and to scale to new products and a growing number of market participants.
Benefits
By incorporating more datasets, including the interaction between different marketplaces, cross-market mechanisms used to manipulate ENCC (Electricity National Control Centre) decision making will be more readily identifiable. This may encourage more timely notification of changes in operating profiles and prices to the ENCC, making the plan more secure and reducing decision-making pressure.
Furthermore, by applying machine learning techniques, anomaly detection can be individualised to a resource's economics, size, and technology type. This enables market monitoring to identify anomalies across new technology types and to better support all market participants in improving compliance with market rules, without the unintentional bias towards larger BM Units that can result from standard rules-based alerting. This will become more important as the energy transition brings greater participation from smaller energy providers.
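To illustrate the idea of individualising detection to each unit, the sketch below flags offer prices that are unusual relative to a unit's own history rather than against a single market-wide rule. It is a minimal, hypothetical example: the column names, threshold, and the use of a robust z-score are assumptions for illustration, not the models developed in the project.

```python
import pandas as pd


def flag_price_anomalies(prices: pd.DataFrame, threshold: float = 3.5) -> pd.DataFrame:
    """Flag offer prices that are unusual for each individual BM Unit.

    A robust z-score (median and median absolute deviation) is computed per
    unit, so a small battery and a large CCGT are each judged against their
    own price history rather than one market-wide rule.
    Assumed columns: 'bm_unit', 'settlement_period', 'offer_price'.
    """
    def per_unit(group: pd.DataFrame) -> pd.DataFrame:
        median = group["offer_price"].median()
        mad = (group["offer_price"] - median).abs().median()
        scale = mad if mad > 0 else 1e-9  # guard units with constant prices
        group = group.copy()
        group["robust_z"] = 0.6745 * (group["offer_price"] - median) / scale
        group["anomaly"] = group["robust_z"].abs() > threshold
        return group

    return prices.groupby("bm_unit", group_keys=False).apply(per_unit)
```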
Learnings
Outcomes
The successful outcome of the project will be a new suite of tools that allows anomalies to be identified and investigated by the market monitoring team against the different REMIT principles, with the tools integrated into market monitoring processes and understood and utilised within the team. At the end of WP2-WP4, a report describing the methods for finding and characterising anomalies for each type of manipulation, alongside proof-of-concept code for extracting the anomalies with visualisations, will be produced. Where a behaviour cannot be detected with the required confidence, the difficulties and possible routes to improve the data will be identified.
Working with Hartree Innovation Centre, we have created anomaly-based detection models for each of the possible types of market manipulation identified as part of the project scope. As the project progressed, these models were embedded into market monitoring processes incrementally as each piece of work completed, generating more sophisticated alerts for review. This includes monitoring the price levels of Balancing Mechanism units and creating a new process to assess cases against the new Inflexible Offer Licence Condition (IOLC).
Each work pack produced commented Jupyter Notebook scripts showing the methodology and steps for processing the data into the anomaly-based models, along with example outputs of the models, which were tested and fed back to Hartree Innovation Centre throughout the project. These scripts are fully configurable, and the comments throughout them make future changes easier if additions are needed following changes to the data or market.
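As a hypothetical illustration of the configurable structure described above (not the delivered notebooks; the parameter names and the use of scikit-learn's IsolationForest are assumptions for the example):

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Configuration kept at the top so thresholds and features can be adjusted
# without touching the modelling code when the data or market changes.
CONFIG = {
    "features": ["offer_price", "bid_price", "declared_availability"],
    "contamination": 0.01,  # expected share of anomalous records
    "random_state": 42,
}


def score_anomalies(data: pd.DataFrame, config: dict = CONFIG) -> pd.DataFrame:
    """Fit an IsolationForest on the configured features and attach scores."""
    model = IsolationForest(
        contamination=config["contamination"],
        random_state=config["random_state"],
    )
    features = data[config["features"]].fillna(0.0)
    scored = data.copy()
    scored["is_anomaly"] = model.fit_predict(features) == -1  # -1 = anomalous
    return scored
```

Keeping the configuration separate from the modelling logic is one way to support the kind of future proofing described above, since features and thresholds can be changed without reworking the analysis itself.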
Lessons Learnt
The conclusion of WP1 was that some internal datasets would enhance the capability of the tools for monitoring the types of manipulation, and that there were fundamental differences in naming conventions between the ESO and public datasets selected. It was considered not beneficial to continue analysing all of these datasets given the volume of data provided in WP1, especially where datasets provide a similar level of information. It was therefore concluded to focus on a subset of data from the ESO database, supplemented with some public datasets, for the development of the models in the later work packages.
Despite WP1 being a dedicated exploratory analysis phase, one of the main lessons learnt was to ensure there was sufficient time in the first half of each work pack for Hartree Innovation Centre to understand the additional new datasets they would be analysing and to understand our requirements. Given the complexity and scale of both the datasets and the asks of the project, Hartree needed time to digest and understand the data before starting the analysis and, consequently, building the anomaly-based models. This was an integral part of the process, and we therefore maintained our weekly catch-ups across the whole lifespan of the project to allow consistent feedback and questions to be raised.
The other key lesson learnt was the need to be prepared for the following work packs, especially in relation to data extraction. The team would discuss ideas, determine what would be an ideal and successful outcome of the next work pack, and prepare materials for Hartree Innovation Centre providing the background information required for its delivery. However, the time required for data extraction was not always considered, so work packs would be initiated before all data had been provided to Hartree Innovation Centre. This resulted in some small delays to the start of the work packs. Earlier consideration of how time-intensive data extraction is, together with earlier planning, would have resolved this.